Search for: All records

Creators/Authors contains: "Chi, M"


  1. This paper examines procedural and conditional metacognitive knowledge and student motivation across two intelligent tutoring systems (ITSs), one for logic and one for probability. Students were categorized by metacognitive knowledge and motivation level. Interventions (nudges and worked examples) supported the backward-chaining strategy. The results led to an MMI framework combining metacognitive instruction, motivation, and prompting to support effective knowledge transfer.
  2. Evaluates the ability of deep knowledge tracing (DKT) models to track individual knowledge components (KCs) in programming tasks. Proposes two enhancements, an explicit KC layer and code features, and shows that the KC layer yields modest improvements in KC-level interpretability, especially when tracking incorrect submissions (a minimal sketch of a KC-aware DKT head appears after this list).
  3. Presents findings that delivering more deep reinforcement learning (DRL)-derived pedagogical interventions may not improve outcomes, and identifies scenarios where simpler or fewer interventions are more effective.
  4. Introduces EM-EDM, an apprenticeship learning (AL) framework that uses expectation-maximization to model heterogeneous student pedagogical strategies across large continuous state spaces. EM-EDM outperforms four AL baselines and two DRL policies on two pedagogical action prediction tasks.
  5. Proposes a DRL-based pedagogical policy that chooses when to present or skip training problems in a logic tutor. Four conditions are compared: control, adaptive DRL, random skipping, and DRL with worked-example choice. The DRL policy reduces training time while maintaining posttest performance (a toy action-selection sketch appears after this list).
  6. Investigates how examples, nudges, or practice support metacognitive knowledge transfer for factual and procedural learners. Compares learner types across intervention conditions to determine which instructional methods enhance metacognitive strategies and improve transfer. 
  7. A key challenge in e-learning environments such as Intelligent Tutoring Systems (ITSs) is to induce effective pedagogical policies efficiently. While Deep Reinforcement Learning (DRL) often suffers from sample inefficiency and the difficulty of designing reward functions, Apprenticeship Learning (AL) algorithms can overcome both. However, most AL algorithms cannot handle heterogeneity because they assume all demonstrations are generated by a homogeneous policy driven by a single reward function. Those AL algorithms that do consider heterogeneity often cannot generalize to large continuous state spaces and work only with discrete states. In this paper, we propose EM-EDM, a general expectation-maximization (EM) based AL framework that induces effective pedagogical policies from given optimal or near-optimal demonstrations, which are assumed to be driven by heterogeneous reward functions. We compare the policies induced by EM-EDM against four AL-based baselines and two DRL-induced policies on two different but related pedagogical action prediction tasks. For both tasks, EM-EDM outperforms the four AL baselines across all performance metrics, as well as the two DRL baselines. This suggests that EM-EDM can effectively model complex student pedagogical decision-making processes, since it can manage a large, continuous state space and adapt to diverse, heterogeneous reward functions from very few demonstrations (a schematic of the EM loop appears after this list).
  8. Learning to derive subgoals reduces the gap between experts and students and prepares students for future problem solving. This paper explores a training strategy using backward worked examples (BWE) and backward problem solving (BPS) within an intelligent logic tutor to support backward strategy learning, with analysis of student experience, performance, and proof construction. Results show that students trained with both BWE and BPS outperform those trained with neither or with BWE alone, demonstrating more efficient subgoal derivation.
  9. Introduces OAT (Offline with Augmented Trajectories), a generative sub-trajectory augmentation method designed to improve the accuracy of off-policy evaluation. Experiments across robotics, healthcare, and e-learning show substantial performance gains over baselines (a plumbing-level sketch of sub-trajectory augmentation appears after this list).
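
For item 2, a minimal sketch of a deep knowledge tracing model with an explicit knowledge-component head, assuming a PyTorch implementation; the class name, layer sizes, and the concatenation of code features are illustrative assumptions rather than the paper's actual architecture:

import torch
import torch.nn as nn

class KCAwareDKT(nn.Module):
    """Hypothetical DKT variant with an explicit per-KC output layer."""

    def __init__(self, interaction_dim, code_feat_dim, n_kcs, hidden_dim=128):
        super().__init__()
        # Recurrent encoder over the student's submission history; code features
        # (e.g., an embedding of the submitted program) are concatenated per step.
        self.rnn = nn.LSTM(interaction_dim + code_feat_dim, hidden_dim, batch_first=True)
        # Explicit KC layer: one correctness estimate per knowledge component.
        self.kc_head = nn.Linear(hidden_dim, n_kcs)

    def forward(self, interactions, code_feats):
        # interactions: (batch, time, interaction_dim); code_feats: (batch, time, code_feat_dim)
        h, _ = self.rnn(torch.cat([interactions, code_feats], dim=-1))
        # Per-step probability of answering each KC correctly on the next attempt.
        return torch.sigmoid(self.kc_head(h))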
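
For item 5, a toy sketch of how a learned present-or-skip decision could be queried, assuming a DQN-style action-value network; the state features, network shape, and the two-action set are assumptions for illustration only:

PRESENT, SKIP = 0, 1  # hypothetical action ids

class SkipPolicy(nn.Module):
    """Illustrative action-value network choosing to present or skip a problem."""

    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.q = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # Q-values for PRESENT and SKIP
        )

    def act(self, state):
        # Greedy choice between the two actions for the current student state.
        with torch.no_grad():
            return int(self.q(state).argmax().item())

At tutoring time, the tutor would call act on a feature vector summarizing the student's recent performance and either present or skip the next training problem.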
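
For item 7, a schematic of the EM loop over heterogeneous demonstrations, in the spirit of EM-EDM: demonstrations are softly assigned to clusters, and one policy is re-fit per cluster. The helpers fit_policy and policy_loglik are hypothetical stand-ins for the EDM policy-induction and likelihood computations, which the abstract does not spell out:

import numpy as np

def em_edm(demos, n_clusters, fit_policy, policy_loglik, n_iters=50):
    """Schematic EM over demonstrations assumed to come from heterogeneous policies."""
    n = len(demos)
    resp = np.full((n, n_clusters), 1.0 / n_clusters)   # soft cluster assignments
    weights = np.full(n_clusters, 1.0 / n_clusters)      # cluster priors
    policies = [fit_policy(demos, resp[:, k]) for k in range(n_clusters)]

    for _ in range(n_iters):
        # E-step: how well does each cluster's policy explain each demonstration?
        loglik = np.array([[policy_loglik(policies[k], d) for k in range(n_clusters)]
                           for d in demos])
        log_resp = np.log(weights) + loglik
        log_resp -= log_resp.max(axis=1, keepdims=True)  # numerical stability
        resp = np.exp(log_resp)
        resp /= resp.sum(axis=1, keepdims=True)

        # M-step: update priors and re-fit one policy per cluster with the new weights.
        weights = resp.mean(axis=0)
        policies = [fit_policy(demos, resp[:, k]) for k in range(n_clusters)]

    return policies, resp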
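
Finally, for item 9, a plumbing-level sketch of sub-trajectory augmentation for off-policy evaluation, in the spirit of OAT; generative_model.sample is a hypothetical stand-in for whatever generator OAT actually trains, and only the slicing and pooling steps are shown:

import random

def sub_trajectories(trajectory, length):
    """Yield every contiguous sub-trajectory of the given length."""
    for start in range(len(trajectory) - length + 1):
        yield trajectory[start:start + length]

def augment_dataset(trajectories, generative_model, sub_len=10, per_traj=4):
    """Pool real trajectories with generated sub-trajectories for evaluation."""
    augmented = []
    for traj in trajectories:
        subs = list(sub_trajectories(traj, sub_len))
        if not subs:
            continue
        for seed in random.sample(subs, min(per_traj, len(subs))):
            # Condition the (hypothetical) generator on a real sub-trajectory.
            augmented.append(generative_model.sample(seed))
    # Off-policy evaluation then runs over the real and augmented data together.
    return trajectories + augmented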